
    Understanding the Error Behavior of Complex Critical Software Systems through Field Data

    Software systems are the basis for everyday human activities, which are increasingly dependent on software. Software is an integral part of the systems we interact with in our daily life, ranging from small systems for entertainment and domotics to large systems and infrastructures that provide fundamental services such as telecommunication, transportation, and finance. In particular, software systems play a key role in critical domains, where they support crucial activities. For example, ground and air transportation, power supply, nuclear plants, and medical applications strongly rely on software systems: failures affecting these systems can lead to severe consequences, which can be catastrophic in terms of business or, even worse, human losses. Therefore, given the growing dependence on software systems in life-critical and other critical applications, dependability has become one of the most relevant industry and research concerns of the last decades.
    Software faults have been recognized as one of the major causes of system failures, since the hardware failure rate has been decreasing over the years. Time and cost constraints, along with technical limitations, often do not allow the correctness of the software to be fully validated by means of testing alone; therefore, software might be released with residual faults that are activated during operations. The activation of a fault generates errors, which propagate through the components of the system, possibly leading to a failure. Therefore, in order to produce reliable software, it is important to understand how errors affect a software system. This is of paramount importance especially in the context of complex critical software systems, where the occurrence of a failure can lead to severe consequences. However, the analysis of the error behavior of this kind of system is not trivial: they are often distributed systems based on many interacting heterogeneous components and layers, including Off-The-Shelf (OTS) and third-party components, and legacy systems. All these aspects undermine the understanding of the error behavior of complex critical software systems.
    A well-established methodology to evaluate the dependability of operational systems and to identify their dependability bottlenecks is field failure data analysis (FFDA), which is based on monitoring and recording the errors and failures that occur during the operational phase of the system under real workload conditions, i.e., field data. Indeed, direct measurement and analysis of natural failures occurring under real workload conditions is among the most accurate ways to assess dependability characteristics. Monitoring techniques are one of the main sources of field data.
    The contribution of the thesis is a methodology for understanding the error behavior of complex critical software systems by means of the field data generated by the monitoring techniques already implemented in the target system. The use of available monitoring techniques makes it possible to overcome the limitations imposed in the context of critical systems, avoiding severe changes to the system and preserving its functionality and performance. The methodology is based on fault injection experiments that stimulate the target system with different error conditions. Injection experiments accelerate the collection of the error data naturally generated by the monitoring techniques already implemented in the system.
    The collected data are analyzed in order to characterize the behavior of the system under the software errors that occurred. To this aim, the proposed methodology leverages a set of innovative means defined in this dissertation: (i) Error Propagation graphs, which allow the analysis of the error propagation phenomena that occurred in the target system and that can be inferred from the collected field data, and a set of metrics composed of (ii) Error Determination Degree, which gives insights into the ability of the error notifications of a monitoring technique to suggest either the fault that led to an error or the failure the error led to in the system, (iii) Error Propagation Reportability, which captures the ability of a monitoring technique to report the propagation of errors, and (iv) Data Dissimilarity, which gives insights into the suitability of the data generated by the monitoring techniques for failure analysis.
    The methodology has been experimented on two instances of complex critical software systems in the field of Air Traffic Control (ATC), i.e., a communication middleware supporting data exchange among ATC applications, and an arrival manager responsible for managing flight arrivals to a given airspace, within an industry-academia collaboration in the context of a national research project. Results show that the field data generated by the monitoring techniques already implemented in a complex critical software system can be leveraged to obtain insights about the error behavior exhibited by the target system, as well as about potentially beneficial locations for error detection mechanisms (EDMs) and error recovery mechanisms (ERMs). In addition, the proposed methodology also allowed the effectiveness of the monitoring techniques to be characterized in terms of failure reporting, error propagation reportability, and data dissimilarity.
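    As an illustration of the kind of analysis this enables, the following Python sketch shows one possible way to derive error propagation edges from timestamped error notifications collected during an injection experiment. The record layout, field names, and aggregation rule are assumptions made for illustration only; the dissertation defines its own construction of Error Propagation graphs and of the associated metrics.

from collections import defaultdict

def build_error_propagation_graph(error_events):
    """error_events: iterable of (timestamp, component, error_type) tuples
    reported by the monitoring techniques during one injection experiment."""
    edges = defaultdict(int)
    ordered = sorted(error_events)                 # order notifications by time
    for (_, src, _), (_, dst, _) in zip(ordered, ordered[1:]):
        if src != dst:                             # an error in one component followed
            edges[(src, dst)] += 1                 # by an error in another component
    return dict(edges)

# Edges that recur across many experiments suggest frequent propagation paths and
# hence candidate locations for error detection and recovery mechanisms (EDMs/ERMs).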

    Seismic Performance of Sheathed Cold-formed Shear Walls

    The paper presents and discusses the results of research on the seismic behaviour of cold-formed steel stud shear walls sheathed with wood-based (oriented strand board) and gypsum-based (wallboard) panels. Within this activity, the paper provides the outcomes of the experimental (capacity evaluation) and theoretical (demand evaluation) phases of the research. Moreover, a contribution is given to the evaluation of the strength reduction factor of this structural typology.
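    For context, the strength reduction (behaviour) factor of a lateral resisting system is commonly decomposed into a ductility-related part and an overstrength part; the sketch below shows this generic, textbook-style estimate from an idealized bilinear capacity curve (Newmark-Hall relation for the ductility part). It is not the specific evaluation procedure developed in the paper, and the input values are placeholders.

import math

def behaviour_factor(d_yield, d_ultimate, v_yield, v_design):
    """Generic q = R_mu * Omega estimate from an idealized bilinear capacity curve."""
    mu = d_ultimate / d_yield                  # displacement ductility
    r_mu = math.sqrt(2.0 * mu - 1.0)           # Newmark-Hall, short/medium period range
    overstrength = v_yield / v_design          # reserve beyond the design lateral force
    return r_mu * overstrength

# Example with placeholder values: d_y = 6 mm, d_u = 24 mm, V_y = 40 kN, V_d = 30 kN
print(behaviour_factor(6.0, 24.0, 40.0, 30.0))   # ~3.5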

    The relation between cash flows and economic performance in the digital age: An empirical analysis

    Cash flow analysis plays an increasingly important role in the study of business dynamics, since cash flows play a key role in a company's economic performance, not only from a retrospective standpoint but also in predictive terms. The literature on the subject is limited in the number and depth of studies, the samples analyzed so far are small, and the statistical tools are weak. Retracing the steps of past research, we studied the relationships between the cash flows of several management areas and economic performance, using a complete sample of Italian listed companies in the 2008-2017 period and more solid statistical tools than previous studies. The database used to collect the balance sheet data necessary for our research is Amadeus, on the Bureau Van Dijk platform, which already provides reclassified and easily comparable financial statements. Correlation and multiple regression analyses were used to assess whether our cash flow proxies could be strong predictors of future cash flows and, consequently, of business performance. The flows for investments and the ability to generate cash, where the latter is positively correlated with future profitability, manage to explain, together with the net cash generation of the company, a large part of the variability of the operating income produced in subsequent periods. The flows from investments seem to be the most suitable for correctly classifying the most profitable companies over the medium-long term, while cash generation deriving from the core business contributes to providing answers about corporate profitability over shorter time horizons.
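    A minimal sketch of the kind of model involved is shown below: an ordinary least squares regression of next-period operating income on cash flows from the different management areas. The column names and the performance proxy are illustrative assumptions, not the actual variables of the study.

import pandas as pd
import statsmodels.api as sm

def fit_cash_flow_model(panel: pd.DataFrame):
    """panel: one row per firm-year with cash flow components and the
    next-period operating income used as the performance proxy."""
    X = sm.add_constant(panel[["cf_operating", "cf_investing", "cf_financing"]])
    y = panel["operating_income_next_year"]
    return sm.OLS(y, X, missing="drop").fit()

# Usage: model = fit_cash_flow_model(panel); print(model.summary())
# A correlation matrix (panel.corr()) gives a first screening of the associations.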

    Design Tools for Bolted End-Plate Beam-to-Column Joints

    Predicting the response of beam-to-column joints is essential to evaluate the response of moment frames. The well-known component method is based on a mechanical model of the joint, obtained by subdividing the joint into elementary components that are subsequently reassembled to obtain the characteristics of the whole joint. Significant advantages of the component method are: (i) the mechanics-based modelling approach; (ii) the general characterization of the individual components. However, the method is commonly perceived by practising engineers as too laborious for practical applications. Within this context, this paper summarizes the results of a theoretical study aimed at developing simplified analysis tools for bolted end-plate beam-to-column joints, based on the Eurocode 3 component method. The accuracy of the component method was first evaluated by comparing theoretical predictions of the plastic resistance and initial stiffness with corresponding experimental data collected from the available literature. Subsequently, design/analysis charts were developed through a parametric application of the component method by means of automatic calculation tools. These charts are quick and easy tools for the first phases of the design process, in order to identify joint configurations and geometrical properties satisfying specified structural performance requirements. The parametric analysis also allowed further simplified analytical tools to be identified, in the form of nondimensional equations for quickly predicting the joint structural properties. With reference to selected geometries, the approximate equations were verified to provide sufficiently accurate predictions of both the stiffness and the resistance of the examined beam-to-column joints.
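    As a rough illustration of the component assembly underlying the Eurocode 3 method, the sketch below combines component stiffness coefficients acting in series into the initial rotational stiffness and takes the weakest component as governing the moment resistance, for a joint idealized with a single equivalent bolt row. The component values and lever arm are placeholders; the full EN 1993-1-8 procedure (multiple bolt rows, effective lengths, group effects) is considerably more detailed.

E_STEEL = 210_000.0  # N/mm^2, Young's modulus of steel

def initial_stiffness(k_components, lever_arm):
    """S_j,ini = E * z^2 / sum(1/k_i): components act as springs in series."""
    flexibility = sum(1.0 / k for k in k_components)
    return E_STEEL * lever_arm ** 2 / flexibility          # N*mm/rad

def moment_resistance(f_rd_components, lever_arm):
    """M_j,Rd = z * min(F_Rd,i): the weakest basic component governs."""
    return lever_arm * min(f_rd_components)                # N*mm

# Placeholder example: three component stiffness coefficients (mm), lever arm z = 350 mm
s_ini = initial_stiffness([5.2, 7.8, 11.0], lever_arm=350.0)
m_rd = moment_resistance([180e3, 210e3, 160e3], lever_arm=350.0)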

    Achieving Isolation in Mixed-Criticality Industrial Edge Systems with Real-Time Containers Appendix

    Real-time containers are a promising solution to reduce latencies in time-sensitive cloud systems. Recent efforts aim to extend their usage to industrial edge systems with mixed-criticality constraints. In these contexts, isolation becomes a major concern: a disturbance (such as a timing fault or an unexpected overload) affecting a container must not impact the behavior of other containers deployed on the same hardware. In this paper, we propose a novel architectural solution to achieve isolation in real-time containers, based on real-time co-kernels, hierarchical scheduling, and time-division networking. The architecture has been implemented on Linux patched with the Xenomai co-kernel, extended with a new hierarchical scheduling policy, named SCHED_DS, and integrating the RTNet stack. Experimental results are promising in terms of overhead and latency compared to other Linux-based solutions. More importantly, the isolation of containers is guaranteed even in the presence of severe co-located disturbances, such as faulty tasks (taking more time than declared) or high CPU, network, or I/O stress on the same machine.
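    The abstract does not detail the semantics of SCHED_DS; as a loose illustration of why per-container CPU reservations confine overloads, the toy simulation below assigns each container a budget replenished every period and lets a backlogged "faulty" container compete with a critical one. The parameters, the replenishment rule, and the fixed-priority ordering are assumptions for illustration only.

def simulate(servers, horizon):
    """servers: name -> {'budget': Q, 'period': P, 'wants_cpu': fn(t) -> bool},
    listed in priority order (Python dicts preserve insertion order)."""
    remaining = {name: s["budget"] for name, s in servers.items()}
    usage = {name: 0 for name in servers}
    for t in range(horizon):
        for name, s in servers.items():
            if t % s["period"] == 0:           # replenish the budget at period start
                remaining[name] = s["budget"]
        for name, s in servers.items():        # highest-priority eligible server runs
            if s["wants_cpu"](t) and remaining[name] > 0:
                remaining[name] -= 1
                usage[name] += 1
                break
    return usage

servers = {
    "critical_container": {"budget": 3, "period": 10, "wants_cpu": lambda t: True},
    "faulty_container":   {"budget": 2, "period": 10, "wants_cpu": lambda t: True},  # overloads
}
print(simulate(servers, horizon=100))
# -> {'critical_container': 30, 'faulty_container': 20}: the faulty container can
# never consume more than its 2-tick budget per period, so the critical one keeps
# receiving its full reservation despite the co-located overload.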

    Technical Report: Anomaly Detection for a Critical Industrial System using Context, Logs and Metrics

    Recent advances in contextual anomaly detection attempt to combine resource metrics and event logs to uncover unexpected system behaviors and malfunctions at runtime. These techniques are highly relevant for critical software systems, where monitoring is often mandated by international standards and guidelines. In this technical report, we analyze the effectiveness of a metrics-logs contextual anomaly detection technique in a middleware for Air Traffic Control systems. Our study addresses the challenges of applying such techniques to a new case study with a dense volume of logs and a finer monitoring sampling rate. We propose an automated abstraction approach to infer system activities from dense logs and use regression analysis to infer the anomaly detector. We observed that detection accuracy is impacted by abrupt changes in resource metrics or when anomalies are asymptomatic in both resource metrics and event logs. Guided by our experimental results, we propose and evaluate several actionable improvements, which include a change detection algorithm and the use of time windows in contextual anomaly detection. This technical report accompanies the paper “Contextual Anomaly Detection for a Critical Industrial System based on Logs and Metrics” [1] and provides further details on the analysis method, case study, and experimental results.
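    A minimal sketch of the metrics-logs contextual scheme described above is given below: a regression model trained on fault-free windows predicts a resource metric from counts of log-derived activities, and windows with large residuals are flagged. The window length, features, and threshold are illustrative assumptions, not the settings of the case study.

import numpy as np
from sklearn.linear_model import LinearRegression

def fit_context_model(activity_counts, metric):
    """activity_counts: (n_windows, n_activities) from fault-free runs;
    metric: (n_windows,) resource metric (e.g. CPU utilisation) per window."""
    model = LinearRegression().fit(activity_counts, metric)
    sigma = np.std(metric - model.predict(activity_counts))   # baseline residual spread
    return model, sigma

def detect_anomalies(model, sigma, activity_counts, metric, k=3.0):
    """Flag windows whose metric deviates more than k * sigma from the prediction."""
    residuals = np.abs(metric - model.predict(activity_counts))
    return np.where(residuals > k * sigma)[0]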

    Seismic response of Cfs strap-braced stud walls: Theoretical study

    The use of cold-formed steel (CFS) profiles in low-rise residential buildings has increased in the European construction sector. This interest stems from the potential offered by this constructive system: high structural performance, lightness, short construction times, durability, and eco-efficiency. Nevertheless, current structural codes, such as the Eurocodes, do not provide enough information about the seismic design of this structural typology. In an effort to investigate the seismic response of CFS structures, theoretical and experimental research has been carried out at the University of Naples Federico II, with the main aim of supporting the spread of these systems in seismic areas. This study focuses on an “all-steel design” solution in which strap-braced stud walls are the main lateral resisting system. In the present paper, the outcomes of the theoretical phase are shown, with the aim of defining criteria for the seismic design of such structures. In particular, a critical analysis of the requirements for CFS systems provided by the American code AISI S213 has been carried out by comparing them with those given by the Eurocodes for traditional braced steel frames.

    SIMBIO-Sim: a performance simulator for the SIMBIO-SYS suite on board the BepiColombo mission

    The SIMBIO-SYS simulator is a useful tool to test the instrument performance and to predict the instrument behaviour during the whole scientific mission. It has been developed in the Interactive Data Language (IDL), and it produces three groups of output data: i) the geometrical quantities related to the spacecraft and the channels, which include both the general information about the spacecraft and the information for each filter; ii) the radiometric outputs, which include the planet reflectance, the radiance, and the expected signal measured by the detector; iii) the quantities related to the channel performance, such as the integration time (IT), which has to be chosen so as to avoid detector saturation, and the expected dark current of the detector.
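    As a rough illustration of how an expected signal and a saturation-limited integration time can be derived from an at-aperture radiance, consider the generic radiometric sketch below. The actual SIMBIO-SYS radiometric model, channel parameters, and calibration data are those described in the related papers; the formula and parameter names used here are a textbook-style simplification.

import numpy as np

H_PLANCK = 6.626e-34   # J s
C_LIGHT = 2.998e8      # m / s

def expected_signal_electrons(radiance, wavelength, bandwidth, pixel_area,
                              f_number, optics_transmission, quantum_efficiency,
                              integration_time):
    """Photo-electrons collected by one pixel.
    radiance in W m^-2 sr^-1 m^-1, wavelength/bandwidth in m, pixel_area in m^2."""
    omega = np.pi / (4.0 * f_number ** 2)       # solid angle of the pupil seen by a pixel
    photon_energy = H_PLANCK * C_LIGHT / wavelength
    power = radiance * bandwidth * pixel_area * omega * optics_transmission
    return power * quantum_efficiency * integration_time / photon_energy

def max_integration_time(full_well, dark_current, **radiometry):
    """Longest IT (s) keeping signal plus dark charge below the full-well capacity."""
    rate = expected_signal_electrons(integration_time=1.0, **radiometry) + dark_current
    return full_well / rate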

    Development of a simulator of the SIMBIOSYS suite onboard the BepiColombo mission

    BepiColombo is the fifth cornerstone mission of the European Space Agency (ESA), dedicated to the study of the planet Mercury. The BepiColombo spacecraft comprises two science modules: the Mercury Planetary Orbiter (MPO), realized by ESA, and the Mercury Magnetospheric Orbiter, provided by the Japan Aerospace Exploration Agency. The MPO carries 11 instruments, including the 'Spectrometer and Imagers for MPO BepiColombo Integrated Observatory System' (SIMBIOSYS). The SIMBIOSYS suite includes three optical channels: a Stereoscopic Imaging Channel, a High Resolution Imaging Channel, and a Visible and Near-Infrared Hyperspectral Imager. SIMBIOSYS will characterize the hermean surface in terms of surface morphology, volcanism, global tectonics, and chemical composition. The aim of this work is to describe a tool for predicting the radiometric response of the three SIMBIOSYS channels. Given the spectral properties of the surface, the instrument characteristics, and the geometrical conditions of the observation, the SIMBIOSYS simulator is capable of estimating the expected signal and the integration times for the entire mission lifetime. In the simulator, the spectral radiance entering the instrument optical apertures is modelled using a Hapke reflectance model with the parameters expected for the hermean surface. The instrument performance is simulated by means of calibrated optical and detector responses. The simulator employs the SPICE (Spacecraft, Planet, Instrument, C-matrix, Events) toolkit, which allows the exact position of the MPO with respect to the planet surface and the Sun to be known for each epoch.
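    As an illustration of the kind of surface reflectance model mentioned above, the sketch below evaluates a basic Hapke bidirectional reflectance (isotropic-scattering H functions, a single-term Henyey-Greenstein phase function, and a shadow-hiding opposition surge). The parameter values the simulator actually adopts for the hermean surface are given in the paper; the defaults and simplifications here are illustrative only.

import numpy as np

def hapke_reflectance(mu0, mu, g, w, xi=-0.2, b0=1.0, h=0.08):
    """Basic Hapke bidirectional reflectance r = I/J (sr^-1).
    mu0, mu: cosines of incidence and emission angles; g: phase angle (rad);
    w: single-scattering albedo; xi: Henyey-Greenstein asymmetry; b0, h: opposition surge."""
    gamma = np.sqrt(1.0 - w)
    H = lambda x: (1.0 + 2.0 * x) / (1.0 + 2.0 * x * gamma)   # Chandrasekhar H (approximation)
    p = (1.0 - xi ** 2) / (1.0 + 2.0 * xi * np.cos(g) + xi ** 2) ** 1.5
    b = b0 / (1.0 + np.tan(g / 2.0) / h)                       # shadow-hiding opposition effect
    return (w / (4.0 * np.pi)) * mu0 / (mu0 + mu) * ((1.0 + b) * p + H(mu0) * H(mu) - 1.0)

# Radiance at the aperture for a collimated solar spectral irradiance J (W m^-2 m^-1):
# L = J * hapke_reflectance(np.cos(i), np.cos(e), g, w)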